Implicit Neural Representations (INR) have recently been shown to be a powerful tool for high-quality video compression. However, existing works are limited in that they do not explicitly exploit the temporal redundancy in videos, leading to long encoding times. Additionally, these methods have fixed architectures which do not scale to longer videos or higher resolutions. To address these issues, we propose NIRVANA, which treats videos as groups of frames and fits separate networks to each group to perform patch-wise prediction. This design shares computation within each group, across the spatial and temporal dimensions, resulting in reduced encoding time of the video. The video representation is modeled autoregressively, with networks fit on the current group initialized using weights from the previous group's model. To further enhance efficiency, we quantize the network parameters during training, requiring no post-hoc pruning or quantization. When compared with previous works on the benchmark UVG dataset, NIRVANA improves encoding quality from 37.36 to 37.70 (in terms of PSNR) and encoding speed by 12X, while maintaining the same compression rate. In contrast to prior video INR works, which struggle with higher resolutions and longer videos, we show that our algorithm is highly flexible and scales naturally owing to its patch-wise and autoregressive design. Moreover, our method achieves variable bitrate compression by adapting to videos with varying inter-frame motion. NIRVANA achieves a 6X decoding speedup and scales well with more GPUs, making it practical for various deployment scenarios.
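As a rough illustration of the autoregressive, group-wise fitting described above, the sketch below fits a small coordinate network per group of frames and warm-starts each group from the previous group's weights. The `PatchINR` module, its layer sizes, and the training loop are hypothetical placeholders, not NIRVANA's actual architecture or its quantization-aware training procedure.

```python
import copy
import torch
import torch.nn as nn

class PatchINR(nn.Module):
    """Hypothetical patch-wise INR: maps a (x, y, t) patch coordinate to the patch's RGB pixels."""
    def __init__(self, hidden=256, patch=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 3 * patch * patch),   # flattened RGB values of one patch
        )

    def forward(self, coords):                      # coords: (N, 3)
        return self.net(coords)

def fit_video(groups, steps=500, lr=1e-3):
    """Fit one network per group of frames, initialising each group from the previous one."""
    prev_state, models = None, []
    for coords, targets in groups:                  # coords: (N, 3), targets: (N, 3*patch*patch)
        model = PatchINR()
        if prev_state is not None:                  # autoregressive warm start from the last group
            model.load_state_dict(prev_state)
        opt = torch.optim.Adam(model.parameters(), lr=lr)
        for _ in range(steps):
            opt.zero_grad()
            loss = nn.functional.mse_loss(model(coords), targets)
            loss.backward()
            opt.step()
        prev_state = copy.deepcopy(model.state_dict())
        models.append(model)
    return models
```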
Motion prediction is a classic problem in computer vision that aims to forecast future motion given an observed pose sequence. Various deep learning models have been proposed and have achieved state-of-the-art performance on motion prediction. However, existing methods typically focus on modeling temporal dynamics in pose space. Unfortunately, the complex and high-dimensional nature of human motion brings inherent challenges to capturing such dynamic context. Therefore, we move away from the conventional pose-based representation and propose a new approach that adopts a phase-space trajectory representation of individual joints. Moreover, current methods tend to consider only the dependencies between physically connected joints. In this paper, we introduce a novel convolutional neural model that effectively exploits explicit prior knowledge of motion anatomy, and simultaneously captures the spatial and temporal information of joint trajectory dynamics. We then propose a global optimization module that learns the implicit relationships between individual joint features. Empirically, our method is evaluated on large-scale 3D human motion benchmark datasets (i.e., Human3.6M, CMU Mocap). The results demonstrate that our method sets the new state of the art on these benchmark datasets. Our code will be available at https://github.com/post-group/teid.
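A minimal sketch of what a phase-space trajectory representation of individual joints could look like: each joint is described by its position together with a finite-difference velocity. The exact representation used in the paper may differ; this is only an assumed form for intuition.

```python
import torch

def phase_space_trajectories(poses):
    """poses: (T, J, 3) 3D joint positions over T frames.
    Returns a (T, J, 6) phase-space representation: position plus a
    finite-difference velocity for every joint."""
    vel = torch.zeros_like(poses)
    vel[1:] = poses[1:] - poses[:-1]
    return torch.cat([poses, vel], dim=-1)

# e.g. a 50-frame observed sequence with 22 joints
obs = torch.randn(50, 22, 3)
phase = phase_space_trajectories(obs)   # (50, 22, 6), fed to the trajectory model
```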
To promote a better performance-bandwidth trade-off for multi-agent perception, we propose a novel distilled collaboration graph (DiscoGraph) to model trainable, pose-aware, and adaptive collaboration among agents. Our key novelties lie in two aspects. First, we propose a teacher-student framework to train DiscoGraph via knowledge distillation. The teacher model employs early collaboration with holistic-view inputs; the student model is based on intermediate collaboration with single-view inputs. Our framework trains DiscoGraph by constraining the post-collaboration feature maps in the student model to match the corresponding ones in the teacher model. Second, we propose matrix-valued edge weights. In such a matrix, each element reflects the inter-agent attention at a specific spatial region, allowing an agent to adaptively highlight informative regions. During inference, we only need to use the student model, named the Distilled Collaboration Network (DiscoNet). Owing to the teacher-student framework, multiple agents with a shared DiscoNet can collaboratively approach the performance of a hypothetical teacher model with a holistic view. Our approach is validated on V2X-SIM 1.0, a large-scale multi-agent perception dataset synthesized using CARLA and SUMO co-simulation. Our quantitative and qualitative experiments on multi-agent 3D object detection show that DiscoNet not only achieves a better performance-bandwidth trade-off than state-of-the-art collaborative perception methods, but also offers a more straightforward design rationale. Our code is available at https://github.com/ai4ce/disconet.
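The distillation objective described above constrains the student's post-collaboration feature maps to match the teacher's. The sketch below shows one plausible formulation (per-location KL divergence over channel distributions); the paper's exact loss may differ, and a plain per-element MSE would also fit the description.

```python
import torch.nn.functional as F

def collaboration_distillation_loss(student_feat, teacher_feat, tau=1.0):
    """Match student post-collaboration feature maps to the teacher's.

    student_feat / teacher_feat: (B, C, H, W).
    Illustrative choice: KL divergence between channel distributions at every
    spatial location, softened by a temperature tau.
    """
    B, C, H, W = student_feat.shape
    s = F.log_softmax(student_feat.permute(0, 2, 3, 1).reshape(-1, C) / tau, dim=-1)
    t = F.softmax(teacher_feat.permute(0, 2, 3, 1).reshape(-1, C) / tau, dim=-1)
    return F.kl_div(s, t, reduction="batchmean") * tau * tau
```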
Fine-tuning pre-trained language models has become the prevalent paradigm for building downstream NLP models. Oftentimes fine-tuned models are readily available but their training data is not, due to data privacy or intellectual property concerns. This creates a barrier to fusing knowledge across individual models to yield a better single model. In this paper, we study the problem of merging individual models built on different training data sets to obtain a single model that performs well across all data set domains and generalizes to out-of-domain data. We propose a dataless knowledge fusion method that merges models in their parameter space, guided by weights that minimize prediction differences between the merged model and the individual models. Over a battery of evaluation settings, we show that the proposed method significantly outperforms baselines such as Fisher-weighted averaging or model ensembling. Further, we find that our method is a promising alternative to multi-task learning, as it can preserve or sometimes improve over the individual models without access to the training data. Finally, model merging is more efficient than training a multi-task model, making it applicable to a wider set of scenarios.
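To make the parameter-space merging concrete, here is a minimal sketch of merging state dicts of models that share an architecture by weighted averaging. The uniform weights are placeholders: the paper chooses merging weights that minimize prediction differences between the merged and individual models, and Fisher-weighted averaging (one of the baselines) would replace the scalar weights with per-parameter Fisher estimates.

```python
import torch

def merge_state_dicts(state_dicts, weights=None):
    """Merge models (same architecture) in parameter space by weighted averaging.

    state_dicts: list of model.state_dict() from the individual fine-tuned models.
    weights: one scalar per model; uniform if not given.
    """
    if weights is None:
        weights = [1.0 / len(state_dicts)] * len(state_dicts)
    merged = {}
    for key in state_dicts[0]:
        merged[key] = sum(w * sd[key].float() for w, sd in zip(weights, state_dicts))
    return merged

# usage: merged = merge_state_dicts([m1.state_dict(), m2.state_dict()])
#        model.load_state_dict(merged)
```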
Tabular data is one of the most common data storage formats in business applications, ranging from retail and banking to e-commerce, and these applications rely heavily on machine learning models for business success. One of the key problems in learning from tabular data is to distinguish influential features from all the predefined ones. Global feature selection, which assumes that all instances share the same subset of influential features, has been studied for a long time. However, in practice different instances rely on different feature subsets, which has given rise to instance-wise feature selection and has received increasing attention in recent studies. In this paper, we propose a new method for Discovering Instance-Wise Influential Features for Tabular data (DIWIFT), whose core is the introduction of an influence function to measure the importance of instance-wise features. DIWIFT can automatically discover influential feature subsets of different sizes for different instances, unlike global feature selection, which considers all instances to share the same subset of influential features. On the other hand, different from previous instance-wise feature selection methods, DIWIFT minimizes the loss on a validation set and is therefore more robust to the distribution shift between the training and test sets, which is important for tabular data. Finally, we conduct extensive experiments on both synthetic and real-world datasets to validate the effectiveness of DIWIFT and compare it with baseline methods. We also demonstrate the robustness of our method through several ablation studies.
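The core idea above is to score instance-wise feature importance with an influence function computed against a validation loss. The snippet below is only a simplified first-order stand-in, gradient-based saliency of the loss with respect to the input features, not the paper's actual influence-function estimator.

```python
import torch

def instance_feature_saliency(model, x, y, loss_fn):
    """First-order proxy for instance-wise feature influence.

    x: (N, D) tabular features, y: (N,) labels.
    Returns (N, D): per-instance, per-feature importance scores
    (|d loss / d feature|), a simplified stand-in for influence functions.
    """
    x = x.clone().requires_grad_(True)
    loss = loss_fn(model(x), y)
    grad, = torch.autograd.grad(loss, x)
    return grad.abs()
```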
A recent study has revealed a phenomenon called neural collapse, in which the within-class means of features and the classifier weight vectors converge to the vertices of a simplex equiangular tight frame at the terminal phase of training for classification. In this paper, we explore the corresponding structures of the last-layer feature centers and classifiers in semantic segmentation. Based on our empirical and theoretical analysis, we point out that semantic segmentation naturally brings contextual correlation and imbalanced distribution among classes, which breaks the equiangular and maximally separated structure of neural collapse for both feature centers and classifiers. However, such a symmetric structure is beneficial to discrimination for the minor classes. To preserve these advantages, we introduce a regularizer on feature centers to encourage the network to learn features closer to this appealing structure in imbalanced semantic segmentation. Experimental results show that our method brings significant improvements on both 2D and 3D semantic segmentation benchmarks. Moreover, our method ranks 1st and sets a new record (+6.8% mIoU) on the ScanNet200 test leaderboard. Code will be available at https://github.com/dvlab-research/Imbalanced-Learning.
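One way to picture a regularizer on feature centers, as described above, is to pull the pairwise cosine similarities of class centers toward the simplex-ETF value of -1/(C-1), so centers stay equiangular and maximally separated. This is an assumed illustrative form, not necessarily the paper's exact regularizer.

```python
import torch
import torch.nn.functional as F

def center_regularizer(features, labels, num_classes):
    """Push class feature centers toward a simplex-ETF-like geometry.

    features: (N, D) pixel/point embeddings, labels: (N,) class ids.
    Penalizes deviation of pairwise center cosine similarity from -1/(C-1).
    """
    centers = []
    for c in range(num_classes):
        mask = labels == c
        if mask.any():                       # skip classes absent from the batch
            centers.append(features[mask].mean(dim=0))
    centers = F.normalize(torch.stack(centers), dim=-1)      # (C', D)
    C = centers.shape[0]
    cos = centers @ centers.t()
    target = torch.full_like(cos, -1.0 / max(C - 1, 1))
    off_diag = ~torch.eye(C, dtype=torch.bool, device=cos.device)
    return ((cos - target)[off_diag] ** 2).mean()
```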
Weakly-supervised object localization aims to indicate both the category and the scope of an object in an image given only image-level labels. Most existing works are based on Class Activation Mapping (CAM) and endeavor to enlarge the discriminative area inside the activation map to perceive the whole object, yet they ignore the co-occurrence confounder of object and context (e.g., fish and water), which makes it hard for the model to distinguish object boundaries. Besides, the use of CAM also brings a dilemma: classification and localization always suffer from a performance gap and cannot reach their highest accuracy simultaneously. In this paper, we propose a causal knowledge distillation method, dubbed KD-CI-CAM, to address these two under-explored issues in one go. More specifically, we tackle the co-occurrence context confounder problem via causal intervention (CI), which explores the causalities among image features, contexts, and categories to eliminate the biased object-context entanglement in the class activation maps. Based on the de-biased object feature, we additionally propose a multi-teacher causal distillation framework to balance the absorption of classification knowledge and localization knowledge during model training. Extensive experiments on several benchmarks demonstrate the effectiveness of KD-CI-CAM in learning clear object boundaries from confounding contexts and addressing the dilemma between classification and localization performance.
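As a hedged illustration of the multi-teacher distillation idea, the sketch below balances a classification-knowledge term (KL on softened logits from a classification teacher) against a localization-knowledge term (matching class activation maps from a localization teacher). The weighting scheme and the exact terms are assumptions, not the paper's formulation.

```python
import torch.nn.functional as F

def multi_teacher_distillation_loss(student_logits, cls_teacher_logits,
                                    student_cam, loc_teacher_cam,
                                    tau=2.0, alpha=0.5):
    """Blend classification and localization knowledge during student training.

    student_logits / cls_teacher_logits: (B, num_classes).
    student_cam / loc_teacher_cam: (B, num_classes, H, W) class activation maps.
    """
    kd_cls = F.kl_div(F.log_softmax(student_logits / tau, dim=1),
                      F.softmax(cls_teacher_logits / tau, dim=1),
                      reduction="batchmean") * tau * tau
    kd_loc = F.mse_loss(student_cam, loc_teacher_cam)
    return alpha * kd_cls + (1 - alpha) * kd_loc
```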
Witnessing the impressive achievements of pre-training techniques on large-scale data in the fields of computer vision and natural language processing, we wonder whether this idea could be adopted in a grab-and-go spirit to mitigate the sample inefficiency problem of visuomotor driving. Given the highly dynamic and variant nature of the input, the visuomotor driving task inherently lacks view and translation invariance, and the visual input contains massive information irrelevant to decision making, rendering predominant pre-training approaches from general vision less suitable for the autonomous driving task. To this end, we propose PPGeo (Policy Pre-training via Geometric modeling), an intuitive and straightforward fully self-supervised framework curated for policy pretraining in visuomotor driving. We aim at learning policy representations as a powerful abstraction by modeling 3D geometric scenes on large-scale unlabeled and uncalibrated YouTube driving videos. The proposed PPGeo is performed in two stages to support effective self-supervised training. In the first stage, the geometric modeling framework generates pose and depth predictions simultaneously, with two consecutive frames as input. In the second stage, the visual encoder learns the driving policy representation by predicting the future ego-motion and optimizing the photometric error based on the current visual observation only. As such, the pre-trained visual encoder is equipped with rich driving-policy-related representations and is thereby competent for multiple visuomotor driving tasks. Extensive experiments covering a wide span of challenging scenarios demonstrate the superiority of our proposed approach, with improvements ranging from 2% to over 100% with very limited data. Code and models will be available at https://github.com/OpenDriveLab/PPGeo.
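The self-supervision above rests on a photometric reconstruction error: a source frame is warped into the target view using predicted depth and ego-motion, then compared to the observed target frame. Below is a generic depth-and-pose inverse-warping sketch of that mechanism, standard in monocular self-supervised depth work, not PPGeo's actual implementation (variable names and the simple L1 loss are assumptions).

```python
import torch
import torch.nn.functional as F

def inverse_warp(src, depth, K, K_inv, T):
    """Warp a source frame into the target view using target depth and relative pose.

    src:   (B, 3, H, W) source image
    depth: (B, 1, H, W) predicted depth of the target frame
    K:     (B, 3, 3) camera intrinsics, K_inv its inverse
    T:     (B, 4, 4) relative pose mapping target-camera points into the source camera
    """
    B, _, H, W = src.shape
    ys, xs = torch.meshgrid(torch.arange(H, device=src.device),
                            torch.arange(W, device=src.device), indexing="ij")
    pix = torch.stack([xs, ys, torch.ones_like(xs)], dim=0).float()     # (3, H, W)
    pix = pix.view(1, 3, -1).expand(B, -1, -1)                          # (B, 3, HW)
    cam = (K_inv @ pix) * depth.view(B, 1, -1)                          # back-project to 3D
    cam_h = torch.cat([cam, torch.ones(B, 1, cam.shape[-1], device=src.device)], dim=1)
    src_cam = (T @ cam_h)[:, :3]                                        # move into source camera
    proj = K @ src_cam
    uv = proj[:, :2] / proj[:, 2:].clamp(min=1e-6)                      # project to pixels
    u = 2 * uv[:, 0] / (W - 1) - 1                                      # normalise for grid_sample
    v = 2 * uv[:, 1] / (H - 1) - 1
    grid = torch.stack([u, v], dim=-1).view(B, H, W, 2)
    return F.grid_sample(src, grid, align_corners=True)

# photometric self-supervision (simplified L1 form):
# photo_loss = (target - inverse_warp(source, depth, K, K_inv, pose)).abs().mean()
```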
In this work, we focus on instance-level open vocabulary segmentation, intending to expand a segmenter for instance-wise novel categories without mask annotations. We investigate a simple yet effective framework with the help of image captions, focusing on exploiting thousands of object nouns in captions to discover instances of novel classes. Rather than adopting pretrained caption models or using massive caption datasets with complex pipelines, we propose an end-to-end solution from two aspects: caption grounding and caption generation. In particular, we devise a joint Caption Grounding and Generation (CGG) framework based on a Mask Transformer baseline. The framework has a novel grounding loss that performs explicit and implicit multi-modal feature alignments. We further design a lightweight caption generation head to allow for additional caption supervision. We find that grounding and generation complement each other, significantly enhancing the segmentation performance for novel categories. We conduct extensive experiments on the COCO dataset with two settings: Open Vocabulary Instance Segmentation (OVIS) and Open Set Panoptic Segmentation (OSPS). The results demonstrate the superiority of our CGG framework over previous OVIS methods, achieving a large improvement of 6.8% mAP on novel classes without extra caption data. Our method also achieves over 15% PQ improvements for novel classes on the OSPS benchmark under various settings.
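For intuition on the grounding part, one common way to align mask (object-query) embeddings with caption-noun embeddings is a symmetric contrastive loss, sketched below. The paper's grounding loss combines explicit and implicit multi-modal alignments, so its exact form may differ; the pairing of one noun per mask and the temperature are assumptions.

```python
import torch
import torch.nn.functional as F

def grounding_loss(mask_embeds, noun_embeds, tau=0.07):
    """Symmetric InfoNCE-style alignment between mask and caption-noun embeddings.

    mask_embeds: (N, D) object-query embeddings.
    noun_embeds: (N, D) embeddings of the matched caption nouns, one per mask.
    """
    m = F.normalize(mask_embeds, dim=-1)
    n = F.normalize(noun_embeds, dim=-1)
    logits = m @ n.t() / tau
    targets = torch.arange(m.shape[0], device=m.device)
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))
```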
Nearest-Neighbor (NN) classification has been proven to be a simple and effective approach for few-shot learning. The query data can be classified efficiently by finding the nearest support class based on features extracted by pretrained deep models. However, NN-based methods are sensitive to the data distribution and may produce false predictions if the samples in the support set happen to lie around the distribution boundary of different classes. To solve this issue, we present P3DC-Shot, an improved nearest-neighbor-based few-shot classification method empowered by prior-driven data calibration. Inspired by the distribution calibration technique, which utilizes the distribution or statistics of the base classes to calibrate the data for few-shot tasks, we propose a novel discrete data calibration operation that is more suitable for NN-based few-shot classification. Specifically, we treat the prototypes representing each base class as priors and calibrate each support sample based on its similarity to the different base prototypes. Then, we perform NN classification using these discretely calibrated support data. Results from extensive experiments on various datasets show that our efficient non-learning-based method can outperform or at least be comparable to SOTA methods that need additional learning steps.
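A minimal sketch of the pipeline just described: shift each support feature toward base-class prototypes weighted by similarity, then run nearest-neighbor classification against the calibrated class means. The interpolation coefficient and softmax temperature are hypothetical; the paper's discrete calibration operation may differ in detail.

```python
import torch
import torch.nn.functional as F

def calibrate_support(support, base_prototypes, alpha=0.5, tau=10.0):
    """Prior-driven calibration of support features.

    support: (S, D) few-shot support features, base_prototypes: (K, D) base-class priors.
    Each support sample is interpolated with a similarity-weighted mix of base prototypes.
    """
    s = F.normalize(support, dim=-1)
    p = F.normalize(base_prototypes, dim=-1)
    sim = F.softmax(s @ p.t() * tau, dim=-1)                  # (S, K) affinity to priors
    return alpha * support + (1 - alpha) * sim @ base_prototypes

def nearest_neighbor_classify(query, calibrated_support, support_labels):
    """Assign each query to the class whose calibrated support mean is most similar."""
    classes = support_labels.unique()
    protos = torch.stack([calibrated_support[support_labels == c].mean(0) for c in classes])
    sim = F.normalize(query, dim=-1) @ F.normalize(protos, dim=-1).t()
    return classes[sim.argmax(dim=-1)]
```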